
    Intent Preserving 360 Video Stabilization Using Constrained Optimization

    A system and method are disclosed that solve for rotational updates in 360° videos, removing camera shake while preserving user-intended motion. The method uses a constrained nonlinear optimization approach in quaternion space. Optimal 3D camera rotations are first computed between key frames, and then between consecutive frames. The first, second, and third derivatives of the resulting camera path are minimized to stabilize the camera orientation path. The computation strives to find a smooth path while also limiting its deviation from the original path, so that the orientations stay close to the original even when, for example, the videographer takes a turn. Each frame is then warped to the stabilized path, which results in a smoother video. The rotational camera updates may be applied to the input stream at the source or added as metadata. The technology may influence standards by making rotational-update metadata a component of 360° videos. KEYWORDS: 360 degree video, camera rotation, removing camera shake, computing camera rotation

    Mixture Trees for Modeling and Fast Conditional Sampling with Applications in Vision and Graphics

    ©2005 IEEE. Personal use of this material is permitted. Permission from IEEE must be obtained for all other uses, including reprinting/republishing this material for advertising or promotional purposes, creating new collective works for resale or redistribution to servers or lists, or reuse of any copyrighted components of this work in other works.
    Presented at the 2005 IEEE Conference on Computer Vision and Pattern Recognition (CVPR), 20-25 June 2005, San Diego, CA. DOI: 10.1109/CVPR.2005.224
    We introduce mixture trees, a tree-based data structure for modeling joint probability densities using a greedy hierarchical density estimation scheme. We show that the mixture tree models data efficiently at multiple resolutions, and present fast conditional sampling as one of many possible applications. In particular, the development of this data structure was spurred by a multi-target tracking application, where memory-based motion modeling calls for fast conditional sampling from large empirical densities. However, it is also suited to applications such as texture synthesis, where conditional densities play a central role. Results are presented for both of these applications.

    State of the Art in Example-based Texture Synthesis

    Recent years have witnessed significant progress in example-based texture synthesis algorithms. Given an example texture, these methods produce a larger texture that is tailored to the user's needs. In this state-of-the-art report, we aim to achieve three goals: (1) provide a tutorial that is easy to follow for readers who are not already familiar with the subject, (2) make a comprehensive survey and comparison of different methods, and (3) sketch a vision for future work that can help motivate and guide readers who are interested in texture synthesis research. We cover fundamental algorithms as well as extensions and applications of texture synthesis.

    Design of 2D Time-Varying Vector Fields


    Lagrangian Neural Style Transfer for Fluids

    Artistically controlling the shape, motion, and appearance of fluid simulations poses major challenges in visual effects production. In this paper, we present a neural style transfer approach from images to 3D fluids formulated in a Lagrangian viewpoint. Using particles for style transfer has unique benefits compared to grid-based techniques. Attributes are stored on the particles and hence are trivially transported by the particle motion. This intrinsically ensures temporal consistency of the optimized stylized structure and notably improves the resulting quality. Simultaneously, the expensive, recursive alignment of stylization velocity fields required by grid approaches becomes unnecessary, reducing the computation time to less than an hour and making neural flow stylization practical in production settings. Moreover, the Lagrangian representation improves artistic control, as it allows for multi-fluid stylization and consistent color transfer from images, and the generality of the method enables stylization of smoke and liquids alike.
    Published in ACM Transactions on Graphics (SIGGRAPH 2020); additional materials: http://www.byungsoo.me/project/lnst/index.htm

    Example-based Rendering of Textural Phenomena

    This thesis explores synthesis by example as a paradigm for rendering real-world phenomena. In particular, phenomena that can be visually described as texture are considered. We exploit, for synthesis, the self-repeating nature of the visual elements constituting these texture exemplars. Techniques for unconstrained as well as constrained/controllable synthesis of both image and video textures are presented.

    For unconstrained synthesis, we present two robust techniques that can perform spatio-temporal extension, editing, and merging of image as well as video textures. In the first technique, large patches of input texture are automatically aligned and seamlessly stitched with each other to generate realistic-looking images and videos. The second technique is based on iterative optimization of a global energy function that measures the quality of the synthesized texture with respect to the given input exemplar (sketched below).

    We also present a technique for controllable texture synthesis. In particular, it allows for the generation of motion-controlled texture animations that follow a specified flow field. Animations synthesized in this fashion maintain structural properties like the local shape, size, and orientation of the input texture even as they move according to the specified flow. We cast this problem into an optimization framework that tries to simultaneously satisfy the two (potentially competing) objectives of similarity to the input texture and consistency with the flow field. This optimization is a simple extension of the approach used for unconstrained texture synthesis.

    A general framework for example-based synthesis and rendering is also presented. This framework provides a design space for constructing example-based rendering algorithms. The goal of such algorithms is to use texture exemplars to render animations for which certain behavioral characteristics need to be controlled. Our motion-controlled texture synthesis technique is an instantiation of this framework, where the characteristic being controlled is motion represented as a flow field.

    Ph.D. thesis. Committee Chair: Bobick, Aaron; Committee Co-Chair: Essa, Irfan; Committee Member: Raskar, Ramesh; Committee Member: Rossignac, Jarek; Committee Member: Seitz, Steve; Committee Member: Turk, Greg